Budget based dynamic slot allocation for MapReduce clusters

Author

  • S. Janani
Abstract

MapReduce is a programming model for processing large amounts of data in the cloud, and resource allocation is an active research area for it because allocation decisions largely determine Hadoop's performance. Resource allocation can be further improved through a set of cooperating mechanisms: a budget-based HFS algorithm first identifies the fastest worker nodes based on the budget; Dynamic Hadoop Slot Allocation (DHSA) then allocates the slots; and Longest Approximate Time to End (LATE) handles the speculative tasks that are identified. Finally, the overall performance is compared with existing mechanisms using a Hadoop simulation tool.
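The LATE heuristic mentioned in the abstract ranks running tasks by their estimated remaining time, computed from each task's progress score and progress rate, and speculatively re-executes the worst stragglers. A minimal sketch of that idea, with illustrative task data and function names that are assumptions rather than the paper's actual implementation:

```python
# Hedged sketch of the LATE (Longest Approximate Time to End) idea:
# estimate time-to-end as (1 - progress) / progress_rate and flag the
# slowest tasks for speculation. Names and numbers are illustrative.

def time_to_end(progress, elapsed):
    """Estimated remaining seconds for a task."""
    rate = progress / elapsed          # progress per second so far
    return (1.0 - progress) / rate

def pick_speculative(tasks, cap=1):
    """Return up to `cap` tasks with the longest approximate time to end."""
    ranked = sorted(tasks,
                    key=lambda t: time_to_end(t["progress"], t["elapsed"]),
                    reverse=True)
    return ranked[:cap]

tasks = [
    {"id": "t1", "progress": 0.9, "elapsed": 90},  # fast: ~10 s left
    {"id": "t2", "progress": 0.2, "elapsed": 80},  # straggler: ~320 s left
    {"id": "t3", "progress": 0.5, "elapsed": 60},  # ~60 s left
]
print(pick_speculative(tasks)[0]["id"])  # → t2
```

The real scheduler additionally caps the number of concurrent speculative copies and only speculates on nodes that are not themselves slow; this sketch omits those refinements.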


Similar resources

Optimization Framework for Map Reduce Clusters on Hadoop’s Configuration

Hadoop is a Java-based distributed computing framework designed to support applications implemented with the MapReduce programming model. Hadoop's performance, however, is significantly affected by the settings of its configuration parameters. Unfortunately, manually tuning these parameters is very time-consuming. The existing system uses a Random forest approa...


MROrchestrator: A Fine-Grained Resource Orchestration Framework for Hadoop MapReduce

Efficient resource management in data centers and clouds running large distributed data-processing frameworks like Hadoop is crucial for enhancing the performance of hosted MapReduce applications and boosting resource utilization. However, existing resource-scheduling schemes in Hadoop allocate resources at the granularity of fixed-size, static portions of the nodes, called slots. A slot r...


FLEX: A Slot Allocation Scheduling Optimizer for MapReduce Workloads

Originally, MapReduce implementations such as Hadoop employed First In First Out (FIFO) scheduling, but such simple schemes cause job starvation. The Hadoop Fair Scheduler (HFS) is a slot-based MapReduce scheme designed to ensure a degree of fairness among the jobs by guaranteeing each job at least some minimum number of allocated slots. Our prime contribution in this paper is a different, fle...


Dynamically Scheduling a Component-Based Framework in Clusters

In many clusters and datacenters, application frameworks are used that offer programming models such as Dryad and MapReduce, and jobs submitted to the clusters or datacenters may be targeted at specific instances of these frameworks, for example because of the presence of certain data. An important question that then arises is how to allocate resources to framework instances that may have highl...


On Dynamic Job Ordering and Slot Configurations for Minimizing the Makespan Of Multiple MapReduce Jobs

MapReduce is a popular parallel computing paradigm for Big Data processing in clusters and data centers. It has been observed that different job execution orders and MapReduce slot configurations for a MapReduce workload yield significantly different performance with regard to the makespan, total completion time, system utilization, and other performance metrics. There are quite a few algorithms on ...
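Why job order affects the makespan can be seen with a small two-stage pipeline model: each job's map phase runs after the previous job's map phase, and its reduce phase starts once both its own map and the previous reduce have finished. The durations below are illustrative assumptions, not data from the paper:

```python
# Hedged sketch: makespan of a MapReduce job sequence modeled as a
# two-stage (map, then reduce) flow-shop pipeline. Illustrative only.
from itertools import permutations

def makespan(order, phases):
    """phases: {job: (map_time, reduce_time)}. Returns pipeline makespan."""
    map_done = reduce_done = 0
    for job in order:
        m, r = phases[job]
        map_done += m                                  # maps run back to back
        reduce_done = max(reduce_done, map_done) + r   # reduce waits for its map
    return reduce_done

phases = {"J1": (4, 1), "J2": (1, 4), "J3": (3, 3)}
best = min(permutations(phases), key=lambda o: makespan(o, phases))
print(best, makespan(best, phases))   # → ('J2', 'J3', 'J1') 9
print(makespan(("J1", "J2", "J3"), phases))  # → 12
```

In this toy instance, reordering the same three jobs shrinks the makespan from 12 to 9; this two-machine structure is why flow-shop-style ordering rules (such as Johnson's rule) are relevant to MapReduce makespan minimization.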



Journal:

Volume   Issue

Pages  -

Publication date: 2016